7 research outputs found

    Slice and Dice: A Physicalization Workflow for Anatomical Edutainment

    During the last decades, anatomy has become an interesting topic in education---even for laymen or schoolchildren. As medical imaging techniques become increasingly sophisticated, virtual anatomical education applications have emerged. Still, physical anatomical models are often preferred, as they facilitate the 3D localization of anatomical structures. Recently, data physicalizations (i.e., physical visualizations) have proven to be effective and engaging---sometimes even more than their virtual counterparts. So far, medical data physicalizations have mainly involved 3D printing, which is still expensive and cumbersome. We investigate alternative forms of physicalization that use readily available technologies (home printers) and inexpensive materials (paper or semi-transparent films) to generate crafts for anatomical edutainment. To the best of our knowledge, this is the first computer-generated crafting approach within an anatomical edutainment context. Our approach follows a cost-effective, simple, and easy-to-employ workflow, resulting in assemblable data sculptures (i.e., semi-transparent sliceforms). It primarily supports volumetric data (such as CT or MRI), but mesh data can also be imported. An octree slices the imported volume, and an optimization step simplifies the slice configuration, proposing the optimal order for easy assembly. A packing algorithm places the resulting slices with their labels, annotations, and assembly instructions on paper or transparent film of a user-selected size, to be printed, assembled into a sliceform, and explored. We conducted two user studies to assess our approach, demonstrating that it is an initial positive step towards the successful creation of interactive and engaging anatomical physicalizations.
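The core of a sliceform is two interleaved sets of orthogonal cross-sections cut from the volume. A minimal sketch of that slicing step (hypothetical helper names; the paper's octree subdivision and assembly-order optimization are not reproduced here):

```python
import numpy as np

def sliceform_slices(volume, n_slices):
    """Extract two orthogonal sets of cross-sections from a 3D volume,
    as needed for a sliceform-style physicalization (simplified sketch:
    evenly spaced planes instead of the paper's octree-driven slicing)."""
    nx, ny, _ = volume.shape
    xs = np.linspace(0, nx - 1, n_slices).astype(int)
    ys = np.linspace(0, ny - 1, n_slices).astype(int)
    set_a = [volume[i, :, :] for i in xs]  # planes perpendicular to x
    set_b = [volume[:, j, :] for j in ys]  # planes perpendicular to y
    return set_a, set_b

# toy volume: a solid sphere in a 32^3 grid stands in for CT/MRI data
g = np.indices((32, 32, 32)) - 15.5
vol = (np.sum(g**2, axis=0) <= 12**2).astype(np.uint8)
a, b = sliceform_slices(vol, 6)
```

Each returned 2D array would then be rasterized, labeled, and packed onto a printable page; interleaving set A and set B with slits yields the assemblable sculpture.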

    The iCoCooN: integration of cobweb charts with parallel coordinates for visual analysis of DCE-MRI modeling variations

    The efficacy of radiotherapy treatment depends on the specific characteristics of tumorous tissues. For the determination of these characteristics, clinical practice uses Dynamic Contrast Enhanced (DCE) Magnetic Resonance Imaging (MRI). DCE-MRI data are acquired and modeled using pharmacokinetic modeling to derive, per voxel, a set of parameters indicative of tissue properties. Different pharmacokinetic modeling approaches make different assumptions, resulting in parameters with different distributions. A priori, it is not known whether there are significant differences between modeling assumptions and which assumption is best to apply. Therefore, clinical researchers need to know at least how different choices in modeling affect the resulting pharmacokinetic parameters and also where parameter variations appear. In this paper, we introduce iCoCooN: a visualization application for the exploration and analysis of model-induced variations in pharmacokinetic parameters. We designed a visual representation, the Cocoon, by integrating Parallel Coordinate Plots (PCPs) perpendicularly with Cobweb Charts (CCs). PCPs display the variations in each parameter between modeling choices, while CCs present the relations in a whole parameter set for each modeling choice. The Cocoon is equipped with interactive features to support the exploration of all data aspects in a single combined view. Additionally, interactive brushing allows linking the observations from the Cocoon to the anatomy. We conducted evaluations with experts and also general users. The clinical experts judged that the Cocoon, in combination with its features, facilitates the exploration of all significant information and, especially, enables them to find anatomical correspondences. The results of the evaluation with general users indicate that the Cocoon produces more accurate results compared to independent multiples.
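Both PCP axes and CC spokes require the parameter sets from all modeling choices to be rescaled onto shared axes before plotting. A minimal sketch of that normalization step (illustrative only; parameter names and values are made up, not taken from the paper):

```python
import numpy as np

def normalize_for_axes(param_sets):
    """Min-max normalize each pharmacokinetic parameter across all
    modeling choices, so every PCP column / cobweb spoke spans [0, 1]."""
    data = np.asarray(param_sets, dtype=float)  # shape: (choices, params)
    lo, hi = data.min(axis=0), data.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)      # guard constant columns
    return (data - lo) / span

# three hypothetical modeling choices, four parameters
# (e.g. Ktrans, ve, vp, kep in a Tofts-style model)
choices = [[0.2, 0.30, 0.02, 0.9],
           [0.3, 0.28, 0.05, 1.2],
           [0.5, 0.35, 0.03, 1.5]]
norm = normalize_for_axes(choices)
```

The normalized rows can then be drawn as polylines across vertical PCP axes, and each row again as a closed polygon over radial CC spokes, giving the two perpendicular views the Cocoon combines.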

    PREVIS: Predictive visual analytics of anatomical variability for radiotherapy decision support

    Radiotherapy (RT) requires meticulous planning prior to treatment, where the RT plan is optimized with organ delineations on a pre-treatment Computed Tomography (CT) scan of the patient. The conventionally fractionated treatment usually lasts several weeks. Random changes (e.g., rectal and bladder filling in prostate cancer patients) and systematic changes (e.g., weight loss) occur while the patient is being treated. Therefore, the delivered dose distribution may deviate from the planned one. Modern technology, in particular image guidance, allows these deviations to be minimized, but risks for the patient remain. We present PREVIS: a visual analytics tool for (i) the exploration and prediction of changes in patient anatomy during the upcoming treatment, and (ii) the assessment of treatment strategies with respect to the anticipated changes. Records of during-treatment changes from a retrospective imaging cohort with complete data are employed in PREVIS to infer expected anatomical changes of new incoming patients with incomplete data, using a generative model. Abstracted representations of the retrospective cohort partitioning provide insight into an underlying automated clustering, showing the main modes of variation for past patients. Interactive similarity representations support an informed matching between new incoming patients and past patients. A Principal Component Analysis (PCA)-based generative model describes the predicted spatial probability distributions of the incoming patient's organs in the upcoming weeks of treatment, based on observations of past patients. The generative model is interactively linked to treatment plan evaluation, supporting the selection of the optimal treatment strategy. We present a usage scenario demonstrating the applicability of PREVIS in a clinical research setting, and we evaluate our visual analytics tool with eight clinical researchers.
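The PCA-based generative model underlying such predictions can be sketched as follows: fit PCA to shape descriptors of past patients, then draw plausible new anatomies by sampling component weights. This is a generic illustration under assumed toy data, not the PREVIS implementation:

```python
import numpy as np

def fit_pca(X, k):
    """Fit a k-component PCA on rows of X (one shape-descriptor vector
    per past patient); returns mean, principal axes, per-axis std."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    comps = Vt[:k]                      # principal axes (k, d)
    std = s[:k] / np.sqrt(len(X) - 1)   # std of the data along each axis
    return mu, comps, std

def sample_shapes(mu, comps, std, rng, n=1):
    """Draw n plausible shapes from the PCA generative model by sampling
    Gaussian weights for each principal axis."""
    w = rng.standard_normal((n, len(std))) * std
    return mu + w @ comps

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 8))  # 20 past patients, 8 toy shape features
mu, comps, std = fit_pca(X, 3)
new = sample_shapes(mu, comps, std, rng, n=5)
```

Sampling many such shapes per treatment week yields an empirical spatial probability distribution for an organ, which is what a treatment plan can then be evaluated against.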

    Breast cancer patient characterisation and visualisation using deep learning and fisher information networks

    Breast cancer is the most commonly diagnosed female malignancy globally, with better survival rates if diagnosed early. Mammography is the gold standard in screening programmes for breast cancer, but despite technological advances, high error rates are still reported. Machine learning techniques, and in particular deep learning (DL), have been successfully used for breast cancer detection and classification. However, the added complexity that makes DL models so successful reduces their ability to explain which features are relevant to the model, or whether the model is biased. The main aim of this study is to propose a novel visualisation to help characterise breast cancer patients using Fisher Information Networks (FINs) on features extracted from mammograms by a DL model. In the proposed visualisation, patients are mapped out according to their similarities, and the map can be used to study new patients in a 'patient-like-me' approach. When applied to the CBIS-DDSM dataset, the method proved to be a competitive methodology that can (i) facilitate the analysis and decision-making process in breast cancer diagnosis with the assistance of the FIN visualisations and 'patient-like-me' analysis, and (ii) help improve diagnostic accuracy and reduce overdiagnosis by identifying the most likely diagnosis based on clinical similarities with neighbouring patients.
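At its simplest, a 'patient-like-me' query is a nearest-neighbour lookup in the embedding space where patients have been mapped by similarity. A minimal sketch with plain Euclidean distance (the paper uses a Fisher-information-derived metric, which is not reproduced here; names and data are illustrative):

```python
import numpy as np

def patient_like_me(embeddings, query, k=3):
    """Return indices of the k past patients most similar to a new
    patient, by Euclidean distance in the 2D similarity embedding."""
    d = np.linalg.norm(embeddings - query, axis=1)
    return np.argsort(d)[:k]

# four past patients embedded in 2D, plus one new query patient
emb = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0], [0.2, 0.1]])
nn = patient_like_me(emb, np.array([0.1, 0.1]), k=2)
```

The diagnoses of the returned neighbours are what supports the 'most likely diagnosis from clinically similar patients' reading described in the abstract.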

    O. Baldacci, Educazione geografica permanente, Bologna, Pàtron, 1982, 331 pp. (L. 15.000)

    Purpose: In orthopaedics, minimally invasive injection of bone cement is an established technique. We present HipRFX, a software tool for planning and guiding a cement injection procedure for stabilizing a loosening hip prosthesis. HipRFX works by analysing a pre-operative CT scan and intraoperative C-arm fluoroscopic images. Methods: HipRFX simulates the intraoperative fluoroscopic views that a surgeon would see on a display panel. Structures are rendered by modelling their X-ray attenuation. These renderings are then compared to actual fluoroscopic images, which allows cement volumes to be estimated. Five human cadaver legs were used to validate the software in conjunction with real percutaneous cement injection into artificially created periprosthetic lesions. Results: Based on intraoperatively obtained fluoroscopic images, our software was able to estimate the cement volume that reached the pre-operatively planned targets. The actual median target lesion volume was 3.58 ml (range 3.17–4.64 ml). The median error in computed cement filling, as a percentage of target volume, was 5.3 % (range 2.2–14.8 %). Cement filling was between 17.6 and 55.4 % (median 51.8 %). Conclusions: As a proof of concept, HipRFX was capable of simulating intraoperative fluoroscopic C-arm images. Furthermore, it provided estimates of the fraction of injected cement deposited at its intended target location, as opposed to cement that leaked away. This level of knowledge is usually unavailable to the surgeon viewing a fluoroscopic image and may aid in evaluating the success of a percutaneous cement injection intervention.
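Rendering structures by their X-ray attenuation amounts to computing a digitally reconstructed radiograph: integrate attenuation coefficients along each ray and apply the Beer-Lambert law. A minimal parallel-beam sketch (the actual C-arm geometry is perspective, and the attenuation values below are toy numbers, not calibrated):

```python
import numpy as np

def simulate_fluoro(mu_volume, axis=0):
    """Parallel-beam digitally reconstructed radiograph: sum the linear
    attenuation coefficients along rays (voxel spacing dx = 1), then map
    the line integral to transmitted intensity I = I0 * exp(-integral)."""
    path = mu_volume.sum(axis=axis)   # line integral of mu along each ray
    return np.exp(-path)              # transmitted fraction, with I0 = 1

# toy CT: uniform soft tissue containing a denser blob of bone cement
vol = np.full((16, 16, 16), 0.02)
vol[6:10, 6:10, 6:10] = 0.3          # cement attenuates more strongly
img = simulate_fluoro(vol, axis=0)   # rays through the cement come out darker
```

Comparing such simulated projections of the planned cement volume against the real intraoperative fluoroscopic images is what allows the deposited cement fraction to be estimated.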